Evaluating Measurement and Structural Invariance via Multi-Group Analysis

Tommaso Feraco


Outline

  • Theoretical Background
  • R code
  • A real case study
  • Regressions
  • References

A hot topic

Introduction

COMMENTS?

The importance of Measurement Invariance

  • Researchers often compare groups of subjects on psychological variables … assuming that the adopted instruments similarly measure the same latent constructs across groups
    • Despite its appeal, this assumption is often not justified and needs to be tested to make comparisons across groups valid and interpretable

    • The assessment of Measurement Invariance is a prerequisite for meaningful comparisons across groups (or across time for the same groups)

…but not everyone (fully) agrees: doi:10.1080/10705511.2023.2191292

Invariance of a Structural Equation Model

More generally, testing for Invariance allows us to evaluate to what extent a hypothesized Structural Equation Model (SEM) can be considered invariant (i.e., having the same parameters) across groups

(Some) Applications

  • Evaluation of the psychometric properties of a psychological test or of a theoretical model on different sub-groups

For example, assessment of invariance across:

  -  gender
  -  age group
  -  pathological state
  -  culture, ethnicity, nationality

  • Longitudinal factorial invariance: the invariance of corresponding parameters across time within a group

Assessing Invariance: The Multi-Group Analysis

  • Multi-group analysis is the most widely used method to assess the invariance of a SEM model
    • In this lesson we will focus on a particular class of SEM models: Confirmatory Factor Analysis (CFA) models
    • In particular, we will adopt the Multi-Group Confirmatory Factor Analysis (MG-CFA) approach

Assessing Invariance: The starting point

Assessing Invariance: The idea

  • In MG analysis we start from a baseline situation in which the hypothesized CFA model is estimated simultaneously on all groups. At the beginning, all parameters are free to vary across groups
    • Next, increasingly restrictive models are built in which some parameters (e.g., factor loadings) are constrained to be invariant across groups
    • Comparing these increasingly restrictive models allows us to evaluate the level of invariance between groups

Invariance steps:

  1. Configural Invariance: the structure of the latent variable(s) is the same across groups (\(g \ne g'\)) and/or over time (\(\xi_g = \xi_{g'}\))
  2. Metric Invariance (or Weak Invariance): factor loadings are equivalent across groups and/or over time (\(\Lambda_g = \Lambda_{g'}\))
  3. Scalar Invariance (or Strong Invariance): intercepts of observed variables are equivalent across groups and/or over time (\(\tau_g = \tau_{g'}\))
  4. Strict Invariance (or Residual Invariance, or Invariant Uniqueness): residual variances of observed exogenous variables are equivalent across groups and/or over time (\(\Theta_{\delta,g} = \Theta_{\delta,g'}\))

Note: Scalar/Strong invariance is required for meaningful mean comparisons

Assessing Factorial Invariance: A step-by-step guide

| # | Invariance | Constrained parameters | Comparison model |
|---|------------|------------------------|------------------|
| 0 | Separate models | - | - |
| 1 | Configural | None | - |
| 2 | Metric | \(\lambda_{ij}\) | Configural |
| 3 | Scalar | \(\lambda_{ij},\ \tau_{i}\) | Metric |
| 4 | Observed residual var. | \(\lambda_{ij},\ \tau_{i},\ \theta_{ii}^{\delta}\) | Scalar |
| 5 | Latent variances | \(\lambda_{ij},\ \tau_{i},\ \theta_{ii}^{\delta},\ \phi_{ii}\) | Observed residual var. |
| 6 | Latent covariances | \(\lambda_{ij},\ \tau_{i},\ \theta_{ii}^{\delta},\ \phi_{ii},\ \phi_{ij}\) | Latent variances |
| 7 | Latent means | \(\lambda_{ij},\ \tau_{i},\ \theta_{ii}^{\delta},\ \phi_{ii},\ \phi_{ij},\ \kappa_{i}\) | Latent covariances |
-  **Steps from 1 to 4:** *Measurement Invariance*

-  **Steps from 5 to 7:** *Structural Invariance*

MG-CFA scheme (steps 1-4)

Step 0: Separate models for each group

Step 1: Configural invariance

Step 1: Configural (non-)invariance

Configural invariance means that the “form” of the models is the same in the groups of interest. Form entails both the number of latent variables and whether the loadings are non-zero to begin with.

Step 2: Metric invariance

Step 2: Metric (non-)invariance

Metric invariance means that for each item, the loading of the factor on the item is the same in the two groups (or, again more precisely, that we cannot reject the hypothesis that the loadings are the same).

Step 2: Metric (non-)invariance

The source of group differences does not come from the latent variable!

Step 3: Scalar invariance

Step 3: Scalar invariance

Scalar invariance means that for each item, the intercept is the same. This means that group differences in the item responses are fully accounted for by group differences in the latent construct.

Step 4: Invariance of observed residual variances

Residual invariance means that for each item, the residual variance—the variance of the ominous E pointing into the items—is the same. We can again phrase this statistically: if we regressed the item scores on the factor, then the variance of the remaining residual would be the same in the groups (i.e., there would be homoscedasticity).

Step 4: Invariance of observed residual variances

The thing about the residual is that it captures everything that’s not explained in the model, and explaining changes in the amount of unexplained things seems a bit futile. Residual invariance is often not tested because it’s not necessary for latent mean comparisons. It’s a bit of an anticlimactic level to end on.

Step 5: Invariance of latent variances

Step 6: Invariance of latent covariances

Step 7: Invariance of latent means

Evaluation of level of invariance

  • Analysis of fit indices of the considered invariance model (\(\chi^{2}\), \(RMSEA\), \(CFI\), \(NNFI\)): a good fit supports the validity of invariance

  • Comparison between fit indices of the considered invariance model and a less restrictive model:

    -  $\Delta_{\chi^{2}}$ (strongly dependent on $n$)
    -  $\Delta_{CFI}$
    -  $\Delta_{BIC}$
    -  ...

A marked worsening of fit indices indicates that the considered invariance model is too restrictive and thus must be rejected

Note: It is strongly recommended to make a comprehensive evaluation based on different fit indices, rather than on a single fit criterion

Some general indications on model comparison

Let:

-  $MOD_{A}$ be a model of invariance taken as reference model
-  $MOD_{B}$ be a more restrictive model of invariance than $MOD_{A}$

We will have:

-  $\Delta\chi^{2}_{BA} = \chi^{2}_{MOD_{B}} - \chi^{2}_{MOD_{A}}$, distributed as a $\chi^{2}$ with $df = df_{MOD_{B}} - df_{MOD_{A}}$. If the \(p\)-value associated with \(\Delta\chi^{2}_{BA}\) is less than a critical \(\alpha\) value, then \(MOD_{B}\) must be rejected (note: always assuming \(\alpha_{CRITICAL}=.05\) is not reasonable)
-  \(\Delta_{CFI_{BA}} = CFI_{MOD_B}-CFI_{MOD_A}\). If \(\Delta_{CFI_{BA}} > -.01\), then \(MOD_{B}\) can be accepted
-  \(\Delta_{BIC_{BA}} = BIC_{MOD_B}-BIC_{MOD_A}\). If \(\Delta_{BIC_{BA}} < 0\), then \(MOD_{B}\) is more plausible than \(MOD_{A}\)
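As a minimal sketch of the \(\Delta\chi^{2}\) criterion (the difference values below are purely illustrative), the p-value can be computed directly from the chi-square distribution:

```r
# Illustrative delta chi-square test: the more restrictive model MOD_B
# adds 6 degrees of freedom and increases chi-square by 15.95
delta_chisq <- 15.95
delta_df    <- 6
p <- pchisq(delta_chisq, df = delta_df, lower.tail = FALSE)
round(p, 3)  # about 0.014 -> MOD_B rejected at alpha = .05
```

This is the same computation that `anova()` performs when comparing two nested lavaan models.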

Partial invariance

When model invariance is untenable at a certain level (metric, scalar, or residual), we can determine which specific indicator(s) contribute to the misfit. Partial invariance occurs when most, but not all, parameters are constrained to be invariant. If partial invariance exists at a given level for a model, there are a variety of ways to proceed:

1.  Leave the non-invariant indicator variables in the model, but do not constrain them to be invariant across groups, arguing that the invariant indicators are sufficient to establish comparability of the constructs.
2.  Argue that the differences between indicator variables are small enough that they would not make a substantive difference and proceed with invariance constraints in place.
3.  Remove the indicator variables that are not fully invariant, and then re-run the invariance assessment.
4.  Conclude that because there is not full invariance, the indicator variables must be measuring different constructs across the groups and, therefore, not use the indicators.

How can we recognize the parameters to "free"?

  • Inspection of parameters estimated separately for each group: the more the same parameter differs among groups, the more plausible it is to free it

  • Inspection of Modification Indices:

    -  The *Modification Index* of a constrained parameter indicates the extent to which the model fit could improve if the parameter were left free to vary across groups
    -  In general, we start from a more restrictive model and free the parameters with the highest Modification Indices one by one... until we obtain invariance

Note: Parameters left to vary freely across groups must be interpreted based on relevant theory

CFA hypothesized model

The Data

Data are in the data frame "d"

str(d)
'data.frame':   589 obs. of  8 variables:
 $ x1   : num  1.21 -1.34 1.05 1.02 2.51 ...
 $ x2   : num  0.725 -0.927 -1.037 1.953 1.823 ...
 $ x3   : num  1.153 -1.635 3.148 -0.183 2.449 ...
 $ x4   : num  2.185 -1.788 -0.101 0.978 0.77 ...
 $ x5   : num  0.668 -0.414 1.843 -1.994 -1.774 ...
 $ x6   : num  -1.0233 -0.4756 -0.0675 -0.1684 1.2472 ...
 $ x7   : num  1.046 -0.213 1.572 -1.513 0.194 ...
 $ Group: int  1 0 0 1 1 1 1 0 1 1 ...

The group sizes are:

table(d$Group)

  0   1 
234 355 

Model building in R

This is the reference theoretical model that we want to evaluate

m <- "
f1 =~ x1 + x2 + x3 + x4
f2 =~ x5 + x6 + x7
"

Step 0: Separate models for each group

First of all, try to fit the model separately in the two groups.

This is not configural invariance! We are just testing if the model works in the two groups, not if the model has the same ‘form’ in the two groups.

# Fit the model in the two groups (requires the lavaan package)
library(lavaan)
m0 <- cfa(m, data = d[d$Group==0,])
m1 <- cfa(m, data = d[d$Group==1,])

# Evaluating parameters and goodness-of-fit indices
summary(m0,fit.measures=TRUE)
summary(m1,fit.measures=TRUE)

Step 1: Configural invariance

Fitting a configural model is very easy: just add group = "groupVariableName" to the cfa syntax.

# Fit the configural model
m.conf <- cfa(m, data = d, group = "Group")

# Evaluating parameters and goodness-of-fit indices
summary(m.conf)
# fi is a vector of fit-index names, for example:
fi <- c("chisq", "df", "rmsea", "cfi", "nnfi")
fitmeasures(m.conf, fit.measures = fi)

# Do they decrease compared to the full model?

Step 2: Metric invariance

Adding metric invariance to the model means fixing the loadings to force them to be equal in the two groups.

# Fit the model for metric invariance
m.metr<-cfa(m, data=d, group = "Group",
            group.equal = "loadings")

# Evaluating parameters and goodness-of-fit indices
summary(m.metr)
fitmeasures(m.metr, fit.measures = fi)
# Comparison between metric and configural invariance (delta chi)
anova(m.metr,m.conf)
# Comparison between metric and configural invariance (delta CFI)
fitMeasures(m.metr,"cfi") - fitMeasures(m.conf,"cfi")
# Comparison between metric and configural invariance (delta BIC)
fitMeasures(m.metr,"bic")-fitMeasures(m.conf,"bic")
# Inspection of the modification indices
lavTestScore(m.metr)
parameterTable(m.metr)

Steps 3-7: Evaluation of different invariance models

Through the option group.equal , it is possible to constrain groups of parameters to be equal across groups in order to assess increasingly restrictive invariance hypotheses:

Constrained parameters In R
Factor loadings loadings
Intercepts of manifest variables intercepts
Residual variances of manifest variables residuals
Residual covariances of manifest variables residual.covariances
Residual variances of latent variables lv.variances
Residual covariances of latent variable lv.covariances
Intercepts/means of latent variables means
All regression coefficients regressions
# Example: Model for assessing scalar invariance
m.scal<-cfa(m,d,group="Group",
            group.equal=c("loadings","intercepts"))

A magic

library(semTools)
measurementInvariance(model = m, data = d,
                      group = "Group",
                      fit.measures = fi)

This has been DEPRECATED by the authors of the package and will be removed from future versions of semTools.

You can use it exploratorily, but you cannot:

  • follow and interpret the estimates step by step
  • model partial invariance!

Let’s see what partial invariance is.

Models of partial invariance

Through the option group.partial , you can test for partial invariance by allowing a few parameters to remain free:

# After the inspection of MI, we decided to estimate a model
# of partial metric invariance in which the loading of
# item x5 is free to vary between groups:
m.metr<-cfa(m,data=d,group="Group",
            group.equal="loadings",
            group.partial="f2 =~ x5")

What should we do now?

Which model should we compare it with?

How can we interpret the results?

… Now it’s time for a real case study!

The case study

*SEMs work with variance-covariance matrices. For this reason, it is sufficient to find the covariance matrix in the original article to use its "data"! The data we will use have been generated based on the parameters provided in the article, modifying the sample size. How? Open dataGen.R in the 'data' folder.*

Theoretical model and reference groups

-  Group A: Youths With Manic Symptoms ($n=150$)
-  Group B: Control Group ($n=150$)

The Data (On MOODLE: dmg.RData)

rm(list=ls())
# Loading useful packages:
library(lavaan) ; library(semTools) ; library(semPlot)
load("../data/dmg.RData")
str(dmg)
'data.frame':   300 obs. of  10 variables:
 $ id       : int  1 2 3 4 5 6 7 8 9 10 ...
 $ diagnosis: Factor w/ 2 levels "manic","norming": 1 1 1 1 1 1 1 1 1 1 ...
 $ Info     : num  8.95 11.94 5.8 14.69 6 ...
 $ Sim      : num  9.34 9.93 6.64 17.72 6.56 ...
 $ Vocab    : num  12.39 4.57 6.03 13.21 7.83 ...
 $ Comp     : num  11.61 8.86 5.03 13.38 5.77 ...
 $ PicComp  : num  11.15 4.95 8.02 11.54 8.84 ...
 $ PicArr   : num  15.7 5.46 7.65 12.06 5.39 ...
 $ BlkDsgn  : num  11.45 3.43 9.28 10.46 7.39 ...
 $ ObjAsmb  : num  15.54 3.8 8.63 14.38 9.92 ...

Evaluation of multivariate normality

library(QuantPsyc)
mult.norm(dmg[,3:10])$mult.test # (Mardia, 1970)
          Beta-hat       kappa     p-val
Skewness  2.110397 105.5198437 0.8242484
Kurtosis 79.118959  -0.6032073 0.5463708
# We can assume multivariate normality
#   and therefore use, in SEM,
#   the Maximum Likelihood Method

Theoretical model in R

model<-"gc =~ Info + Sim + Vocab + Comp
        gv =~ PicComp + PicArr + BlkDsgn + ObjAsmb"

Separate models for each group

# Separate models
m.man<-cfa(model,data=dmg[dmg$diagnosis=="manic",])
m.nor<-cfa(model,data=dmg[dmg$diagnosis=="norming",])
# Inspection of fit indices
fitMeasures(m.man,c("chisq","df","rmsea","cfi","nnfi"))
 chisq     df  rmsea    cfi   nnfi 
54.052 19.000  0.111  0.949  0.924 
fitMeasures(m.nor,c("chisq","df","rmsea","cfi","nnfi"))
 chisq     df  rmsea    cfi   nnfi 
18.151 19.000  0.000  1.000  1.003 
# ANY COMMENTS?

Configural Invariance

m.conf<-cfa(model,data=dmg,group="diagnosis")
# Inspection of fit indices
fitMeasures(m.conf,c("chisq","df","rmsea","cfi","nnfi"))
 chisq     df  rmsea    cfi   nnfi 
72.203 38.000  0.077  0.971  0.957 
# ANY COMMENTS?

Metric invariance (1)

# Model of metric invariance
m.metr<-cfa(model,dmg,group="diagnosis",group.equal="loadings")
# Inspection of fit indices
fitMeasures(m.metr,c("chisq","df","rmsea","cfi","nnfi"))
 chisq     df  rmsea    cfi   nnfi 
88.153 44.000  0.082  0.963  0.952 

Metric invariance (2)

#### Metric vs. Configural invariance:
anova(m.metr,m.conf) # (delta chi-square)

Chi-Squared Difference Test

       Df   AIC   BIC  Chisq Chisq diff   RMSEA Df diff Pr(>Chisq)  
m.conf 38 11239 11424 72.203                                        
m.metr 44 11243 11406 88.153      15.95 0.10515       6    0.01402 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
fitMeasures(m.metr,"cfi")-fitMeasures(m.conf,"cfi") # (delta cfi)
   cfi 
-0.008 
fitMeasures(m.metr,"bic")-fitMeasures(m.conf,"bic") # (delta BIC)
    bic 
-18.272 
# ANY COMMENTS?

Scalar invariance (1)

# Model of Scalar invariance
m.scal<-cfa(model,dmg,group="diagnosis",
            group.equal=c("loadings","intercepts"))
# Inspection of fit indices
fitMeasures(m.scal,c("chisq","df","rmsea","cfi","nnfi"))
  chisq      df   rmsea     cfi    nnfi 
144.667  50.000   0.112   0.920   0.910 
# ANY COMMENTS?

Scalar invariance (2)

# Evaluation of Scalar invariance
fitMeasures(m.scal,c("chisq","df","rmsea","cfi","nnfi"))
  chisq      df   rmsea     cfi    nnfi 
144.667  50.000   0.112   0.920   0.910 
anova(m.scal,m.metr)

Chi-Squared Difference Test

       Df   AIC   BIC   Chisq Chisq diff   RMSEA Df diff Pr(>Chisq)    
m.metr 44 11243 11406  88.153                                          
m.scal 50 11287 11428 144.667     56.513 0.23691       6  2.292e-10 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
fitMeasures(m.scal,"cfi")-fitMeasures(m.metr,"cfi")
   cfi 
-0.043 
fitMeasures(m.scal,"bic")-fitMeasures(m.metr,"bic")
   bic 
22.291 
# ANY COMMENTS?  ... Global Scalar invariance is not satisfactory
# ... let's try with Partial Scalar invariance

Inspection of equality constraints

lavTestScore(m.scal)$uni

univariate score tests:

     lhs op   rhs     X2 df p.value
1   .p2. == .p31.  0.419  1   0.517
2   .p3. == .p32.  0.335  1   0.563
3   .p4. == .p33.  2.389  1   0.122
4   .p6. == .p35.  7.026  1   0.008
5   .p7. == .p36.  0.072  1   0.789
6   .p8. == .p37.  0.000  1   0.988
7  .p20. == .p49.  8.342  1   0.004
8  .p21. == .p50. 42.173  1   0.000
9  .p22. == .p51.  2.691  1   0.101
10 .p23. == .p52. 11.089  1   0.001
11 .p24. == .p53.  2.042  1   0.153
12 .p25. == .p54.  3.555  1   0.059
13 .p26. == .p55.  1.018  1   0.313
14 .p27. == .p56.  3.018  1   0.082
# From a first analysis, we can see that the intercept of the variable "Sim"
# (constraint p21 == p50) has a high Modification Index
# ... freeing this parameter results in an improved fit
# (see parTable(m.scal), column id, to "find the meaning" of p21 and p50)
# Let's build a model of Partial Scalar invariance by freeing the intercept of "Sim" ...
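As an optional check, lavaan's internal labels (e.g., .p21., .p50.) can be mapped back to the parameters they stand for by filtering the parameter table on its plabel column; a minimal sketch, assuming m.scal has been fitted as above:

```r
# Look up which parameters the labels .p21. and .p50. refer to
pt <- parTable(m.scal)
pt[pt$plabel %in% c(".p21.", ".p50."),
   c("lhs", "op", "rhs", "group", "plabel")]
# Both rows should correspond to the intercept of "Sim" (Sim ~ 1),
# one per group
```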

Partial Scalar invariance (1)

m.scal.P=cfa(model,dmg,group="diagnosis",
             group.equal=c("loadings","intercepts"),
             group.partial="Sim~1")
# Inspection of fit indices
fitMeasures(m.scal.P,c("chisq","df","rmsea","cfi","nnfi"))
  chisq      df   rmsea     cfi    nnfi 
100.174  49.000   0.083   0.957   0.950 
# ANY COMMENTS?
# What model can we now compare with the
# Partial Invariance model?

Partial Scalar invariance (2)

# Evaluation. Partial Scalar invariance
anova(m.scal.P,m.metr) # Note: Comparison model Metric invariance

Chi-Squared Difference Test

         Df   AIC   BIC   Chisq Chisq diff    RMSEA Df diff Pr(>Chisq)  
m.metr   44 11243 11406  88.153                                         
m.scal.P 49 11245 11389 100.174     12.021 0.096752       5     0.0345 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
fitMeasures(m.scal.P,"cfi")-fitMeasures(m.metr,"cfi")
   cfi 
-0.006 
fitMeasures(m.scal.P,"bic")-fitMeasures(m.metr,"bic")
    bic 
-16.498 
# Partial Scalar invariance is satisfactory
# and now becomes our reference model
# Question: How can we interpret this result?

Invariance of Residuals of observed variables (1)

# Note: parameter "Sim~1" remains free
m.rvo=cfa(model,dmg,group="diagnosis",
          group.equal=c("loadings","intercepts","residuals"),
          group.partial="Sim~1") #Note that this is still here
# Inspection of fit indices
fitMeasures(m.rvo,c("chisq","df","rmsea","cfi","nnfi"))
  chisq      df   rmsea     cfi    nnfi 
127.344  57.000   0.091   0.940   0.941 

Invariance of Residuals of observed variables (2)

# Evaluation of Invariance of Residuals of observed variables
anova(m.rvo,m.scal.P) # Note: Comparison model Partial Scalar invariance

Chi-Squared Difference Test

         Df   AIC   BIC  Chisq Chisq diff   RMSEA Df diff Pr(>Chisq)    
m.scal.P 49 11245 11389 100.17                                          
m.rvo    57 11256 11371 127.34     27.169 0.12639       8  0.0006609 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
fitMeasures(m.rvo,"cfi")-fitMeasures(m.scal.P,"cfi")
   cfi 
-0.016 
fitMeasures(m.rvo,"bic")-fitMeasures(m.scal.P,"bic")
    bic 
-18.461 
# The Invariance of Residuals of observed variables
#   is not satisfactory. Let's take a look at equality constraints

Inspection of equality constraints

lavTestScore(m.rvo)$uni
# (... see complete output )
# From a first analysis, we can see that
#  the residuals of variables  “Comp” and “PicComp”
#  have Modification indices that are
#  particularly high, so let's free them

Partial Invariance of Residuals of observed variables (1)

# Partial Invariance of Residuals of observed variables
m.rvo.P=cfa(model,dmg,group="diagnosis",
            group.equal=c("loadings","intercepts","residuals"),
            group.partial=c("Sim~1","PicComp~~PicComp","Comp~~Comp"))
# Fit indices
fitMeasures(m.rvo.P,c("chisq","df","rmsea","cfi","nnfi"))
  chisq      df   rmsea     cfi    nnfi 
102.420  55.000   0.076   0.960   0.959 

Partial Invariance of Residuals of observed variables (2)

# Evaluation of Partial Invariance of Residuals of observed variables
anova(m.rvo.P,m.scal.P) # Note: Comparison model Partial Scalar invariance

Chi-Squared Difference Test

         Df   AIC   BIC  Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
m.scal.P 49 11245 11389 100.17                                    
m.rvo.P  55 11235 11357 102.42     2.2458     0       6     0.8958
fitMeasures(m.rvo.P,"cfi")-fitMeasures(m.scal.P,"cfi")
  cfi 
0.003 
fitMeasures(m.rvo.P,"bic")-fitMeasures(m.scal.P,"bic")
    bic 
-31.977 
# Partial Invariance of Residuals of observed variables
# is satisfactory.
# Question: What can we say about overall
# Measurement Invariance of the baseline theoretical model?

Invariance of Variance of latent variables (1)

m.vvl=cfa(model,dmg,group="diagnosis",
          group.equal=c("loadings","intercepts","residuals",
                        "lv.variances"),
          group.partial=c("Sim~1","PicComp~~PicComp","Comp~~Comp"))
# Fit indices
fitMeasures(m.vvl,c("chisq","df","rmsea","cfi","nnfi"))
  chisq      df   rmsea     cfi    nnfi 
105.825  57.000   0.076   0.959   0.959 

Invariance of Variance of latent variables (2)

# Evaluation of Invariance of latent variables
anova(m.vvl,m.rvo.P)

Chi-Squared Difference Test

        Df   AIC   BIC  Chisq Chisq diff    RMSEA Df diff Pr(>Chisq)
m.rvo.P 55 11235 11357 102.42                                       
m.vvl   57 11234 11349 105.83     3.4056 0.068449       2     0.1822
fitMeasures(m.vvl,"cfi")-fitMeasures(m.rvo.P,"cfi")
   cfi 
-0.001 
fitMeasures(m.vvl,"bic")-fitMeasures(m.rvo.P,"bic")
   bic 
-8.002 
# OK, this looks good!

Invariance of Covariance of latent variables (1)

m.cvl=cfa(model,dmg,group="diagnosis",
          group.equal=c("loadings","intercepts"
                        ,"residuals","lv.variances","lv.covariances"),
          group.partial=c("Sim~1","PicComp~~PicComp","Comp~~Comp"))
# Fit indices
fitMeasures(m.cvl,c("chisq","df","rmsea","cfi","nnfi"))
  chisq      df   rmsea     cfi    nnfi 
106.412  58.000   0.075   0.959   0.960 

Invariance of Covariance of latent variables(2)

# Evaluation of Invariance of Covariance of latent variables
anova(m.cvl,m.vvl)

Chi-Squared Difference Test

      Df   AIC   BIC  Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
m.vvl 57 11234 11349 105.83                                    
m.cvl 58 11233 11344 106.41    0.58642     0       1     0.4438
fitMeasures(m.cvl,"cfi")-fitMeasures(m.vvl,"cfi")
cfi 
  0 
fitMeasures(m.cvl,"bic")-fitMeasures(m.vvl,"bic")
   bic 
-5.117 
# OK, we're almost there!

Invariance of Means of latent variables (1)

# last step
m.med=cfa(model,dmg,group="diagnosis",
          group.equal=c("loadings","intercepts"
                        ,"residuals","lv.variances","lv.covariances",
                        "means"),
          group.partial=c("Sim~1","PicComp~~PicComp","Comp~~Comp"))
# Fit indices
fitMeasures(m.med,c("chisq","df","rmsea","cfi","nnfi"))
  chisq      df   rmsea     cfi    nnfi 
110.933  60.000   0.075   0.957   0.960 

Invariance of Means of latent variables (2)

# Evaluation of Invariance of Means of latent variables
anova(m.med,m.cvl)

Chi-Squared Difference Test

      Df   AIC   BIC  Chisq Chisq diff    RMSEA Df diff Pr(>Chisq)
m.cvl 58 11233 11344 106.41                                       
m.med 60 11234 11337 110.93     4.5212 0.091673       2     0.1043
fitMeasures(m.med,"cfi")-fitMeasures(m.cvl,"cfi")
   cfi 
-0.002 
fitMeasures(m.med,"bic")-fitMeasures(m.cvl,"bic")
   bic 
-6.886 
# This last model is also satisfactory
# ANY COMMENTS?

Summing up

| Models | npar | df | chisq | cfi | tli | nnfi | agfi | srmr | rmsea | bic | aic |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Manics | 17 | 19 | 54.05 | 0.95 | 0.92 | 0.92 | 0.84 | 0.05 | 0.11 | 5590.65 | 5539.47 |
| Norming | 17 | 19 | 18.15 | 1 | 1 | 1 | 0.94 | 0.03 | 0 | 5718.64 | 5667.46 |
| Configural | 50 | 38 | 72.2 | 0.97 | 0.96 | 0.96 | 0.98 | 0.04 | 0.08 | 11424.11 | 11238.93 |
| Metric | 44 | 44 | 88.15 | 0.96 | 0.95 | 0.95 | 0.98 | 0.06 | 0.08 | 11405.84 | 11242.88 |
| Scalar | 38 | 50 | 144.67 | 0.92 | 0.91 | 0.91 | 0.97 | 0.08 | 0.11 | 11428.13 | 11287.39 |
| Scalar Partial | 39 | 49 | 100.17 | 0.96 | 0.95 | 0.95 | 0.98 | 0.07 | 0.08 | 11389.34 | 11244.9 |
| Residual Variances | 31 | 57 | 127.34 | 0.94 | 0.94 | 0.94 | 0.98 | 0.08 | 0.09 | 11370.88 | 11256.07 |
| Residual Variances Partial | 33 | 55 | 102.42 | 0.96 | 0.96 | 0.96 | 0.98 | 0.07 | 0.08 | 11357.37 | 11235.14 |
| Latent Variances | 31 | 57 | 105.83 | 0.96 | 0.96 | 0.96 | 0.98 | 0.09 | 0.08 | 11349.36 | 11234.55 |
| Latent Covariances | 30 | 58 | 106.41 | 0.96 | 0.96 | 0.96 | 0.98 | 0.09 | 0.07 | 11344.25 | 11233.13 |
| Latent Means | 28 | 60 | 110.93 | 0.96 | 0.96 | 0.96 | 0.98 | 0.09 | 0.08 | 11337.36 | 11233.66 |

Interpretations, comments, or questions?

To combine usefulness and fun …

Graphical representation of the Configural Invariance model with standardized parameters (attached R code)

Exercise: Graphically represent the former invariance models using the function semPaths of the package semPlot

Invariance of regression coefficients

Tests of multi-group invariance can also be used to compare the regression coefficients of two or more groups.

Invariance of regression coefficients

Following the same steps as in the previous example, we can add "regressions" to the group.equal argument.

fitPreg <- sem(path, d, group = "group",
               group.equal = "regressions")

You can apply this to a path model with manifest variables only or to a full SEM after measurement invariance is tested.
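To judge whether equality of the regression coefficients is tenable, the constrained model can be compared with an unconstrained multi-group model, exactly as in the MG-CFA steps; a sketch, assuming a path model string path, a data frame d, and a grouping variable group as in the snippet above (all hypothetical names):

```r
library(lavaan)

# Baseline: regression coefficients free to vary across groups
fitP <- sem(path, d, group = "group")
# Constrained: regression coefficients equal across groups
fitPreg <- sem(path, d, group = "group",
               group.equal = "regressions")

# Compare with delta chi-square, delta CFI, and delta BIC, as before
anova(fitPreg, fitP)
fitMeasures(fitPreg, "cfi") - fitMeasures(fitP, "cfi")
fitMeasures(fitPreg, "bic") - fitMeasures(fitP, "bic")
```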

QUESTIONS? LET'S SEE THE "code"

Useful references

  • Beaujean, Freeman, Youngstrom, & Carlson (2012). The structure of cognitive abilities in youths with manic symptoms: A factorial invariance study. Assessment, 19, 462-471
    • Non-invariance materials were directly taken from the wonderful blogpost of Julia Rohrer. READ IT!
    • How much is invariance disregarded?
    • Protzko’s humorous preprint on what we can actually claim with measurement invariance…without good measures!
    • lavaan.ugent.be/tutorial/groups.html
    • www.structuralequations.com/
    • but also some controversies
    • … more controversies

References